Asilomar AI Principles: Ethics to Guide a Top-Down Control Regime

#artificialintelligence

Get 1,200 artificial intelligence (AI) researchers and 2,500 other businesspeople and academics, such as Elon Musk, Stephen Hawking, Ray Kurzweil, and David Chalmers, to endorse a single document on AI ethics, and you have the Asilomar AI Principles, with serious sound-bite power: experts agree on a humanistic AI ethics program! But do the Principles advance a worthy cause? Reading the text of the Asilomar Principles, you instead find a few vague ethical aspirations offered to guide a top-down control regime. The points do this subtly, so, as the holographic Dr. Lanning advised in I, Robot (2004), "you have to ask the right questions."


Ethical AI - Responsible AI best practices

#artificialintelligence

Absolute Reproducibility means a guarantee that any and all results, outputs, outcomes, artifacts, etc. can be exactly reproduced under any circumstances. Adversarial Action means actions characterised by mala fide (malicious) intent and/or bad faith. Assessment means the action or process of making a series of determinations and judgments after taking deliberate steps to test, measure, and collectively deliberate the objects of concern and their outcomes. Assets means information technology hardware that concerns Products Machine Learning. Best Practice Guideline means this document. Business Stakeholders means the departments and/or teams within the Organisation that do not conduct data science and/or technical Machine Learning but have a material interest in Products Machine Learning.


MTP for Machine Learning Systems -- ExO Economy

#artificialintelligence

OpenExO Community member Christiaan Dorfling posted a fascinating question about MTPs and machine-learned models. We decided to share the answer, and here's a link back to the original post in OpenExO Ecosystem-Community-Circles (you will need an account on the platform to see the full thread). I'd have to start with: what is the company's MTP? Someone or some organization is behind it. If they don't have an MTP, or if theirs is just bad, then their ML use cases are right in line with their MTP.


These rules could save humanity from the threat of rogue AI

#artificialintelligence

The possibility of man-made machines turning against their creators has become a trendy topic these days. Undoubtedly, Isaac Asimov's Three Laws of Robotics are no longer fit for purpose. For the sake of the global public good, we need something more serious and specific to safeguard our limitless ambitions, and humanity itself. Today, the internet connects more than half the world's population. And although the internet provides us with convenience and efficiency, it also brings threats. This is especially true in an age in which a good deal of our daily life is driven by big data and artificial intelligence.


Ethically Hacking The 21st Century: How To Own The Future Driven By Artificial Intelligence By Understanding Guiding AI Principles Agreed On By Top Researchers In Asilomar, California - MMIMMC

#artificialintelligence

Recently, in cognizance of this seismic shift, the world's top AI researchers met in Asilomar, California, to deliberate on AI principles and goals. In doing so, this eminent artificial intelligence society gifted humanity a framework for how to own the future. It is only by navigating AI's ethical dilemmas that we will avail ourselves of the life-saving technologies of applied artificial intelligence. The EU, in its Responsible Research and Innovation initiative, calls for investment in legal, social, and ethics [LSE] research. Investment in LSE research will generate knowledge that can match artificial intelligence goals to society's needs.


Book Summary: Life 3.0: Being Human in the Age of A.I. by Max Tegmark

#artificialintelligence

The author's purpose for this book is to acknowledge this uncertainty and to prompt us to collectively make some choices now. The book begins with a prelude, a story of a possible near future. I found it so fascinating that I have copied it out in full (it's just over 6,000 words, so it's a 20-minute read). Have you seen Netflix's "Black Mirror"? The prelude reminds me of an episode of that show, which explores possible technological futures that are frighteningly plausible.


Robohub Digest 02/17: Asilomar AI principles, robot tax, drone art and Super Bowl LI

Robohub

A quick, hassle-free way to stay on top of robotics news, our robotics digest is released on the first Monday of every month. Sign up to get it in your inbox. February is only just gone, and already 2017 is shaping up to be a year full of big ideas and ambitions. The Future of Life Institute, for example, just published the Asilomar AI principles: 23 guidelines to ensure AI developments are beneficial to humanity. They are calling for shared responsibility and caution against an AI arms race.


Guidelines for Preventing an AI Takeover Endorsed by Musk and Hawking

#artificialintelligence

Two of modern science's most powerful voices, Elon Musk and Stephen Hawking, have both issued warnings about the dangers of artificial intelligence in the past (Musk has even been tinkering with ways humanity can augment itself to keep up). But good news: Musk and Hawking are jumping on board the ethical AI bandwagon. In an open letter published by the Future of Life Institute (FLI) last Monday, Musk and Hawking joined several AI and robotics researchers in endorsing a comprehensive outline called the "Asilomar AI Principles": 23 guidelines for avoiding an artificial intelligence armageddon. The goal is to guide AI research toward beneficial intelligence rather than "undirected intelligence." The principles are the product of the FLI's 2017 Beneficial AI conference.


These 23 Principles Could Help Us Avoid an AI Apocalypse

#artificialintelligence

Science fiction author Isaac Asimov famously predicted that we'll one day have to program robots with a set of laws that protect us from our mechanical creations. But before we get there, we need rules to ensure that, at the most fundamental level, we're developing AI responsibly and safely. At a recent gathering, a group of experts did just that, coming up with 23 principles to steer the development of AI in a positive direction--and to ensure it doesn't destroy us. The new guidelines, dubbed the 23 Asilomar AI Principles, touch upon issues pertaining to research, ethics, and foresight--from research strategies and data rights to transparency issues and the risks of artificial superintelligence. Previous attempts to establish AI guidelines, including efforts by the IEEE Standards Association, Stanford University's AI100 Standing Committee, and even the White House, were either too narrow in scope, or far too generalized.


Elon Musk and Stephen Hawking warn of artificial intelligence arms race

#artificialintelligence

Stephen Hawking and Elon Musk have joined prominent artificial intelligence researchers in pledging support for principles to protect mankind from machines and a potential AI arms race. An open letter published by the Future of Life Institute (FLI) on Monday outlined the Asilomar AI Principles: 23 guidelines to ensure the development of artificial intelligence that is beneficial to humanity. For decades, science fiction writer Isaac Asimov's 'Three Laws of Robotics' were a cornerstone for the ethical development of robots and artificial intelligence machines. First laid out in his 1942 short story "Runaround," Asimov's three principles stated: a robot must not harm a human through action or inaction; a robot must obey humans; and a robot must protect its own existence. Each rule takes precedence over the rules that follow it, ensuring that a human's life is protected over the existence of a robot.